Aerial target identification method based on switching reasoning evidential network under incomplete information
Yu WANG, Zilin FAN, Tianjun REN, Xiaofei JI
Journal of Computer Applications 2023, 43(4): 1071-1078. DOI: 10.11772/j.issn.1001-9081.2022020287

Existing evidential reasoning methods have a fixed model structure, a single information processing mode, and a single reasoning mechanism, which makes them difficult to apply to target identification in environments containing multiple kinds of incomplete information, such as uncertain, erroneous, and missing information. To address this problem, a Switching Reasoning Evidential Network (SR-EN) method was proposed. Firstly, a multi-template network model was constructed that accounts for evidence-node deletion and other situations. Then, the conditional correlation between each evidence variable and the target type was analyzed to establish a reasoning rule base for incomplete information. Finally, an intelligent spatio-temporal fusion reasoning method based on three evidence input and correction methods was proposed. Experimental results show that, compared with the traditional Evidential Network (EN) and with combinations of EN and information correction methods such as the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), SR-EN achieves continuous and accurate identification of aerial targets under multiple types of random incomplete information while maintaining reasoning timeliness. By effectively identifying the various types of incomplete information, SR-EN adaptively switches evidence processing methods, network structures, and inter-node fusion rules during continuous reasoning.
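The evidence fusion at the heart of evidential-network reasoning can be illustrated with Dempster's rule of combination, the standard way to fuse two independent bodies of evidence over a common frame of discernment. This is a generic sketch of the combination rule only, not the SR-EN method itself; the function name and the two example mass functions are illustrative assumptions.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two mass functions (dicts mapping frozenset hypotheses to mass)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence cannot be combined")
    # Normalize by the non-conflicting mass
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two sensors report belief over the (hypothetical) target types
m1 = {frozenset({"fighter"}): 0.6, frozenset({"bomber"}): 0.3,
      frozenset({"fighter", "bomber"}): 0.1}
m2 = {frozenset({"fighter"}): 0.5, frozenset({"bomber"}): 0.4,
      frozenset({"fighter", "bomber"}): 0.1}
fused = dempster_combine(m1, m2)
```

Combining the two reports concentrates mass on "fighter", since both sensors favor it; the normalization step is what a switching scheme would need to guard when evidence is erroneous or missing.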

Book spine segmentation algorithm based on improved DeepLabv3+ network
Xiaofei JI, Kexin ZHANG, Lirong TANG
Journal of Computer Applications 2023, 43(12): 3927-3932. DOI: 10.11772/j.issn.1001-9081.2022121887

The localization of books is one of the key technologies for the intelligent development of libraries, and accurate book spine segmentation is a major challenge in achieving this goal. To address the difficulties in book spine segmentation caused by dense arrangement, skewed book angles, and highly similar spine textures, a segmentation algorithm based on an improved DeepLabv3+ network was proposed. Firstly, to extract denser pyramid features from book images, the Atrous Spatial Pyramid Pooling (ASPP) in the original DeepLabv3+ network was replaced with the multi-dilation-rate, multi-scale DenseASPP (Dense Atrous Spatial Pyramid Pooling) module. Secondly, to address the original DeepLabv3+ network's insensitivity to the segmentation boundaries of objects with large aspect ratios, a Strip Pooling (SP) module was added to a branch of the DenseASPP module to enhance the strip-like features of book spines. Finally, based on the Multi-Head Self-Attention (MHSA) mechanism in the Vision Transformer (ViT), a global information enhancement based self-attention mechanism was proposed to strengthen the network's ability to capture long-distance features. The proposed algorithm was evaluated on an open-source database. Experimental results show that, compared with the original DeepLabv3+ segmentation algorithm, it improves the Mean Intersection over Union (MIoU) by 1.8 percentage points on the nearly vertical book spine database and by 4.1 percentage points on the skewed book spine database, reaching an MIoU of 93.3% on the latter. These results confirm that the proposed algorithm accurately segments book spine targets with skew angles, dense arrangement, and large aspect ratios.
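The strip pooling idea invoked above, capturing long narrow structures by pooling along one spatial dimension at a time, can be sketched in NumPy. This is a simplified illustration of the pooling-and-broadcast step only, under the assumption of a single-channel 2-D feature map; the actual SP module also includes learned convolutions and gating that are omitted here.

```python
import numpy as np

def strip_pool(feature_map):
    """Combine horizontal and vertical strip pooling of a 2-D feature map.

    Each position receives the mean of its row (a horizontal 1xW strip)
    plus the mean of its column (a vertical Hx1 strip), so the response
    of a long, narrow object such as a book spine is propagated along
    its full extent rather than averaged into a square neighborhood.
    """
    horizontal = feature_map.mean(axis=1, keepdims=True)  # shape (H, 1)
    vertical = feature_map.mean(axis=0, keepdims=True)    # shape (1, W)
    return horizontal + vertical                          # broadcasts to (H, W)

x = np.arange(12, dtype=float).reshape(3, 4)
pooled = strip_pool(x)
```

Because the two pooled tensors are broadcast back to the full spatial size, every output position mixes context from its entire row and entire column, which is why strip pooling suits objects with extreme aspect ratios better than square pooling windows.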
